

Search for: All records

Creators/Authors contains: "Park, Il Memming"


  1. Latent Gaussian process (GP) models are widely used in neuroscience to uncover hidden state evolutions from sequential observations, mainly in neural activity recordings. While latent GP models provide a principled and powerful solution in theory, the intractable posterior in non-conjugate settings necessitates approximate inference schemes, which may lack scalability. In this work, we propose cvHM, a general inference framework for latent GP models leveraging Hida-Matérn kernels and conjugate computation variational inference (CVI). With cvHM, we are able to perform variational inference of latent neural trajectories with linear time complexity for arbitrary likelihoods. The reparameterization of stationary kernels using Hida-Matérn GPs helps us connect the latent variable models that encode prior assumptions through dynamical systems to those that encode trajectory assumptions through GPs. In contrast to previous work, we use bidirectional information filtering, leading to a more concise implementation. Furthermore, we employ the Whittle approximate likelihood to achieve highly efficient hyperparameter learning. 
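The Whittle approximate likelihood mentioned above replaces the exact Gaussian likelihood with a sum over Fourier frequencies, comparing the periodogram of the data against the model's spectral density; because the periodogram is computed once via the FFT, each hyperparameter evaluation is cheap. A minimal sketch for a Matérn-3/2 kernel (the function names, sampling setup, and the continuous-spectrum approximation are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def matern32_spectral_density(omega, sigma2, lengthscale):
    """Two-sided spectral density of a Matern-3/2 kernel:
    S(w) = sigma^2 * 4*lam^3 / (lam^2 + w^2)^2, with lam = sqrt(3)/lengthscale."""
    lam = np.sqrt(3.0) / lengthscale
    return sigma2 * 4.0 * lam**3 / (lam**2 + omega**2) ** 2

def whittle_nll(x, sigma2, lengthscale, dt=1.0):
    """Negative Whittle log-likelihood of a stationary series x.

    Approximates the Gaussian likelihood in O(n log n) by summing
    log S(w_k) + I(w_k)/S(w_k) over Fourier frequencies, where I is the
    periodogram. Aliasing from discrete sampling is ignored in this sketch.
    """
    n = len(x)
    x = x - x.mean()
    fft = np.fft.rfft(x)
    I = (np.abs(fft) ** 2) * dt / n          # periodogram
    omega = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
    I, omega = I[1:], omega[1:]              # drop the zero frequency
    S = matern32_spectral_density(omega, sigma2, lengthscale)
    return float(np.sum(np.log(S) + I / S))
```

Hyperparameters can then be fit by handing `whittle_nll` to any off-the-shelf optimizer, which is what makes the frequency-domain objective attractive for learning.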
  2. Latent variable models have become instrumental in computational neuroscience for reasoning about neural computation. This has fostered the development of powerful offline algorithms for extracting latent neural trajectories from neural recordings. However, despite the potential of real-time alternatives to give immediate feedback to experimentalists, and enhance experimental design, they have received markedly less attention. In this work, we introduce the exponential family variational Kalman filter (eVKF), an online recursive Bayesian method aimed at inferring latent trajectories while simultaneously learning the dynamical system generating them. eVKF works for arbitrary likelihoods and utilizes the constant base measure exponential family to model the latent state stochasticity. We derive a closed-form variational analog to the predict step of the Kalman filter which leads to a provably tighter bound on the ELBO compared to another online variational method. We validate our method on synthetic and real-world data, and, notably, show that it achieves competitive performance. 
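eVKF mirrors the predict/update structure of the classical Kalman filter, which it generalizes to arbitrary likelihoods; in the linear-Gaussian case the recursion reduces to the closed-form steps below. A sketch of that baseline recursion only (not the eVKF algorithm; the function name and small-matrix setup are illustrative):

```python
import numpy as np

def kalman_step(m, P, y, A, Q, C, R):
    """One recursion of the classical Kalman filter: predict the latent
    state through the linear dynamics, then update on the observation y."""
    # predict: p(z_t | y_{1:t-1})
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # update: condition on y_t (innovation form)
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = np.linalg.solve(S, C @ P_pred).T  # Kalman gain P C^T S^{-1}
    m_post = m_pred + K @ (y - C @ m_pred)
    P_post = P_pred - K @ S @ K.T
    return m_post, P_post
```

The appeal of an online method is visible here: each step costs a fixed amount of work per sample, so inference keeps pace with data as it streams in.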
  3. The macaque middle temporal (MT) area is well known for its visual motion selectivity and relevance to motion perception, but the possibility of it also reflecting higher-level cognitive functions has largely been ignored. We tested for effects of task performance distinct from sensory encoding by manipulating subjects' temporal evidence-weighting strategy during a direction discrimination task while performing electrophysiological recordings from groups of MT neurons in rhesus macaques (one male, one female). This revealed multiple components of MT responses that were, surprisingly, not interpretable as behaviorally relevant modulations of motion encoding, or as bottom-up consequences of the readout of motion direction from MT. The time-varying motion-driven responses of MT were strongly affected by our strategic manipulation—but with time courses opposite the subjects' temporal weighting strategies. Furthermore, large choice-correlated signals were represented in population activity distinct from its motion responses, with multiple phases that lagged psychophysical readout and even continued after the stimulus (but which preceded motor responses). In summary, a novel experimental manipulation of strategy allowed us to control the time course of readout to challenge the correlation between sensory responses and choices, and population-level analyses of simultaneously recorded ensembles allowed us to identify strong signals that were so distinct from direction encoding that conventional, single-neuron-centric analyses could not have revealed or properly characterized them. Together, these approaches revealed multiple cognitive contributions to MT responses that are task related but not functionally relevant to encoding or decoding of motion for psychophysical direction discrimination, providing a new perspective on the assumed status of MT as a simple sensory area. 
SIGNIFICANCE STATEMENT: This study extends understanding of the middle temporal (MT) area beyond its representation of visual motion. Combining multineuron recordings, population-level analyses, and controlled manipulation of task strategy, we exposed signals that depended on changes in temporal weighting strategy, but did not manifest as feedforward effects on behavior. This was demonstrated by (1) an inverse relationship between temporal dynamics of behavioral readout and sensory encoding, (2) a choice-correlated signal that always lagged the stimulus time points most correlated with decisions, and (3) a distinct choice-correlated signal after the stimulus. These findings invite re-evaluation of MT for functions outside of its established sensory role and highlight the power of experimenter-controlled changes in temporal strategy, coupled with recording and analysis approaches that transcend the single-neuron perspective. 
  4. Bizley, Jennifer K. (Ed.)
    Brain asymmetry in the sensitivity to spectrotemporal modulation is an established functional feature that underlies the perception of speech and music. The left auditory cortex (ACx) is believed to specialize in processing fast temporal components of speech sounds, and the right ACx slower components. However, the circuit features and neural computations behind these lateralized spectrotemporal processes are poorly understood. To answer these mechanistic questions, we use mice, an animal model that captures some relevant features of human communication systems. In this study, we screened for circuit features that could subserve temporal integration differences between the left and right ACx. We mapped excitatory input to principal neurons in all cortical layers and found significantly stronger recurrent connections in the superficial layers of the right ACx compared to the left. We hypothesized that the underlying recurrent neural dynamics would exhibit differential characteristic timescales corresponding to their hemispheric specialization. To investigate, we recorded spike trains from awake mice and estimated the network time constants using a statistical method that combines evidence from multiple weak signal-to-noise ratio neurons. We found longer temporal integration windows in the superficial layers of the right ACx compared to the left, as predicted by the stronger recurrent excitation. Our study provides substantial evidence linking stronger recurrent synaptic connections to longer network timescales. These findings support speech-processing theories positing that asymmetry in temporal integration is a crucial feature of lateralization in auditory processing. 
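A standard single-neuron version of the timescale estimate is to fit an exponential decay to the autocorrelation of binned spike counts; the paper's contribution is an estimator that pools evidence across many low signal-to-noise neurons, which this sketch does not attempt. The function name, fitting window, and through-origin fit are illustrative assumptions:

```python
import numpy as np

def intrinsic_timescale(counts, dt, max_lag=50):
    """Estimate a timescale tau from binned spike counts by fitting
    log-autocorrelation ~ -lag/tau (least squares through the origin),
    using only the early, well-estimated part of the decay."""
    x = counts - np.mean(counts)
    var = np.dot(x, x) / len(x)
    ac = np.array([np.dot(x[:-k], x[k:]) / (len(x) * var)
                   for k in range(1, max_lag + 1)])
    good = ac > np.exp(-2.0)          # keep lags where ac is still sizable
    lags = (np.arange(1, max_lag + 1) * dt)[good]
    slope = np.sum(lags * np.log(ac[good])) / np.sum(lags ** 2)
    return -1.0 / slope               # ac ~ exp(-lag/tau) implies slope = -1/tau
```

On a long AR(1) series with autocorrelation a^lag (so tau = -dt/log a), the fit recovers the generating timescale to within sampling noise.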
  5. Nonlinear state-space models are powerful tools to describe dynamical structures in complex time series. In a streaming setting where data are processed one sample at a time, simultaneous inference of the state and its nonlinear dynamics has posed significant challenges in practice. We develop a novel online learning framework, leveraging variational inference and sequential Monte Carlo, which enables flexible and accurate Bayesian joint filtering. Our method provides an approximation of the filtering posterior which can be made arbitrarily close to the true filtering distribution for a wide class of dynamics models and observation models. Specifically, the proposed framework can efficiently approximate a posterior over the dynamics using sparse Gaussian processes, allowing for an interpretable model of the latent dynamics. Constant time complexity per sample makes our approach amenable to online learning scenarios and suitable for real-time applications. 
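The sequential Monte Carlo ingredient of such a framework can be illustrated with a plain bootstrap particle filter: propagate particles through the dynamics, reweight by the observation likelihood, resample. The variational components and the sparse-GP posterior over the dynamics are omitted here, and all names are illustrative:

```python
import numpy as np

def bootstrap_pf(ys, n_particles, f, g_loglik, q0, rng):
    """Bootstrap particle filter. f propagates particles one step (with
    noise), g_loglik scores particles against an observation, q0 samples
    the initial particles. Returns the filtered posterior means."""
    particles = q0(n_particles)
    means = []
    for y in ys:
        particles = f(particles, rng)            # propagate through dynamics
        logw = g_loglik(y, particles)            # reweight by observation
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))      # weighted posterior mean
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]               # multinomial resampling
    return np.array(means)
```

Each step costs a fixed amount of work per sample, which is the constant-per-sample complexity that makes this family of methods viable for streaming data.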
  6.
    In recent years, the efficacy of using artificial recurrent neural networks to model cortical dynamics has been a topic of interest. Gated recurrent units (GRUs) are specialized memory elements for building these recurrent neural networks. Despite their remarkable success in natural language, speech, and video processing, and in extracting dynamics underlying neural data, little is understood about the specific dynamics representable in a GRU network, and how these dynamics shape performance and generalization. As a result, it is difficult to know a priori both how well a GRU network will perform on a given task and how well it can mimic the underlying behavior of its biological counterparts. Using a continuous-time analysis, we gain intuition about the inner workings of GRU networks. We restrict our presentation to low dimensions, allowing for a comprehensive visualization. We found a surprisingly rich repertoire of dynamical features that includes stable limit cycles (nonlinear oscillations), multi-stable dynamics with various topologies, and homoclinic bifurcations. At the same time, GRU networks are limited by their inability to produce continuous attractors, which are hypothesized to exist in biological neural networks. We contextualize the usefulness of the different kinds of observed dynamics and support our claims experimentally. 
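The continuous-time view can be made concrete: taking the step size of the discrete GRU update to zero yields a vector field in which the update gate sets the relaxation rate toward the candidate state. A low-dimensional sketch, with zero input for simplicity (the weights and the Euler integration below are illustrative choices, not parameters from the study):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_vector_field(h, Uz, Ur, Uh, bz, br, bh):
    """Continuous-time GRU dynamics dh/dt = z * (hhat - h) with zero input:
    the update gate z scales relaxation toward the candidate state hhat."""
    z = sigmoid(Uz @ h + bz)            # update gate
    r = sigmoid(Ur @ h + br)            # reset gate
    hhat = np.tanh(Uh @ (r * h) + bh)   # candidate state
    return z * (hhat - h)

def integrate(h0, steps=2000, dt=0.1, **params):
    """Forward-Euler integration of the GRU vector field."""
    h = np.array(h0, dtype=float)
    for _ in range(steps):
        h = h + dt * gru_vector_field(h, **params)
    return h
```

With all recurrent weights zero the gates are constant and every trajectory relaxes to the fixed point h = tanh(bh); richer weight choices are what produce the limit cycles and multistability described above.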
  7.
  8. Brain dynamics can exhibit narrow-band nonlinear oscillations and multistability. For a subset of disorders of consciousness and motor control, we hypothesized that some symptoms originate from the inability to spontaneously transition from one attractor to another. Using external perturbations, such as electrical pulses delivered by deep brain stimulation devices, it may be possible to induce a transition out of the pathological attractors. However, inducing such a transition may be non-trivial, rendering current open-loop stimulation strategies insufficient. In order to develop next-generation neural stimulators that can intelligently learn to induce attractor transitions, we require a platform to test the efficacy of such systems. To this end, we designed an analog circuit as a model for multistable brain dynamics. The circuit, an instantiation of a 3-dimensional continuous-time gated recurrent neural network, exhibits two stable limit cycles with distinct periods. To discourage simple perturbation strategies, such as constant or random stimulation patterns, from easily inducing a transition between the stable limit cycles, we designed a state-dependent nonlinear circuit interface for external perturbation. We demonstrate the existence of nontrivial solutions to the transition problem in our circuit implementation. 
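The transition problem can be illustrated with the simplest multistable system, a double well: without perturbation the state stays in its basin of attraction, while a brief pulse pushes it past the unstable point into the other basin. This toy model stands in for the circuit's two limit cycles; the dynamics, pulse shape, and parameters below are illustrative, not the circuit's equations:

```python
def simulate(x0, pulse_time, pulse_amp, dt=0.01, T=20.0):
    """Double-well system dx/dt = x - x^3, with stable attractors at +/-1
    and an unstable point at 0. A 1-time-unit pulse of amplitude pulse_amp
    is applied around pulse_time; forward-Euler integration."""
    n = int(T / dt)
    x = x0
    for i in range(n):
        u = pulse_amp if abs(i * dt - pulse_time) < 0.5 else 0.0
        x = x + dt * (x - x**3 + u)
    return x
```

With no pulse the state parked at -1 stays there; a sufficiently strong pulse carries it across zero, after which the intrinsic dynamics finish the transition to +1. The circuit's state-dependent interface is precisely what makes such simple open-loop pulses ineffective there.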
  9. Marinazzo, Daniele (Ed.)